Iman Awaad Research associate, Hochschule Bonn-Rhein-Sieg
It is possible to commit no mistakes and still lose...
TBD
Gerald Steinbauer-Wagner Associate professor, Graz University of Technology
Dependable Robots through Model-Based Techniques
Although there are plenty of impressive videos out there, autonomous robots still fail surprisingly often at moderately complex everyday tasks in everyday environments. The reasons are manifold and originate mainly from the perception-decision-action cycle interacting with the environment. Insight into the root causes is valuable for the robot itself to act dependably, but it is also important for the developer, as intelligent robots are complex constructs that often lack introspection. The DX community focuses its efforts on run-time verification (e.g., monitoring and diagnosis) in order to improve the dependability of such complex machines. In this talk, we focus on the development and life cycle of an autonomous robot, from design through implementation to deployment. We emphasize that all of these stages can benefit from the application of model-based approaches to improve dependability. We will motivate this holistic view and present techniques developed to tackle the different stages of a robot's life cycle in domains such as logistics and production.
Sebastian Blumenthal Lead software development engineer, KELO Robotics GmbH
Structured testing - and the surprising failures it can reveal
Structured testing is vital for the development of robotic applications. KELO Robotics has established a test procedure that includes two types of tests: field tests and endurance tests. A field test is a supervised functional test of the complete system in a real environment. Endurance tests are stress tests for individual components or the overall system; the emphasis here is on scaling test duration while achieving a semi-automated evaluation of performance. Both types of tests are performed over long periods of weeks to months, which leads to the detection of failures that might otherwise remain hidden. These errors range from degradation of sensing capabilities to wrong assumptions about the system or environment, and even issues with electromagnetic interference.
Marta Romeo Assistant professor, Heriot-Watt University
Failure-in-the-loop HRI: owning mistakes in interaction design
What constitutes a failure when working with interactions between robots and humans? Can we safely assume that humans do not fail when interacting with each other, and can we demand the same from our robotic systems? In this talk, we will explore the role of failures in human-robot interaction, focusing on demystifying the negative preconceptions surrounding them. Drawing on research from the Interaction Design field, we will differentiate between slips and mistakes, map them to robot errors, and discuss how they can drive the interaction. Owning up to a failure could have consequences for the overall rapport between humans and robots. It has the potential to influence trust and provide a valid method to reshape the expectations that naive users have about the technology, which could ultimately reduce the severity of the failure.
Devleena Das PhD candidate, Georgia Institute of Technology
Generating Explanations that Improve User Assistance in Fault Recovery
With the growing capabilities of intelligent systems, the integration of robots in our everyday life is increasing. However, when interacting in such complex human environments, the occasional failure of robotic systems is inevitable. Explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this talk, I will share how we can present meaningful, human-centered explanations that improve non-AI-experts' understanding of robot failures, such that users can provide meaningful solutions for robot recovery. Specifically, we will discuss the importance of environmental context for meaningful explanations, and how we can leverage semantic scene graphs to automatically extract contextual information from the environment to produce human-centered explanations for robot failures.
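As a rough, hypothetical illustration of this idea (not the actual pipeline from the talk), a semantic scene graph can be queried for the objects and relations surrounding a failed action and verbalized into a non-expert-friendly explanation; all class names, relations, and the kitchen scene below are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str                               # e.g. "cup_1"
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    nodes: dict                             # name -> SceneNode
    edges: list                             # (subject, relation, object) triples

    def relations_of(self, name: str):
        """Return all relations in which the given object participates."""
        return [(s, r, o) for (s, r, o) in self.edges if name in (s, o)]

def explain_failure(graph: SceneGraph, failed_action: str, target: str) -> str:
    """Turn the scene context around a failed action into a plain-language hint."""
    context = graph.relations_of(target)
    facts = "; ".join(f"{s} is {r} {o}" for s, r, o in context)
    return (f"The robot could not {failed_action} the {target}. "
            f"Relevant context: {facts}.")

# Hypothetical kitchen scene: the cup the robot should grasp is inside a blocked cabinet.
graph = SceneGraph(
    nodes={"cup_1": SceneNode("cup_1"),
           "cabinet": SceneNode("cabinet", {"state": "closed"})},
    edges=[("cup_1", "inside", "cabinet"), ("cabinet", "blocked_by", "chair")],
)
print(explain_failure(graph, "pick up", "cup_1"))
```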
Esra Erdem Professor, Sabanci University
Explainable plan execution monitoring under partial observability
Successful plan generation for autonomous systems is necessary but not sufficient to guarantee reaching a goal state by executing a plan. Various discrepancies between an expected state and the observed state may occur during plan execution (e.g., due to unexpected exogenous events, changes in the goals, or failure of robot parts), and these discrepancies may lead to plan failures. For that reason, autonomous systems should be equipped with execution monitoring algorithms so that they can autonomously recover from such discrepancies. We introduce a plan execution monitoring algorithm that operates under partial observability. This algorithm relies on novel formal methods for hybrid prediction, diagnosis and explanation generation, and planning. The prediction module generates an expected state after the execution of a part of the plan from an incomplete state to check for discrepancies. The diagnostic reasoning module generates meaningful hypotheses to explain failures of robot parts. Unlike existing diagnosis methods, previously generated hypotheses can be revised based on new partial observations, increasing the accuracy of explanations as further information becomes available. The replanning module considers these explanations while computing a new plan that would avoid such failures. All these reasoning modules are hybrid in that they combine high-level logical reasoning with low-level feasibility checks based on probabilistic methods. We experimentally show that these hybrid formal reasoning modules improve the performance of plan execution monitoring.
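To make the control flow concrete, the sketch below outlines a generic monitor-diagnose-replan cycle in the spirit of the abstract; the module interfaces (predict, observe, diagnose, replan) are hypothetical placeholders rather than the actual system described in the talk.

```python
def execute_with_monitoring(plan, state, predict, observe, diagnose, replan):
    """Skeleton of a monitor-diagnose-replan cycle (interfaces are illustrative only)."""
    hypotheses = []                              # candidate explanations for failures
    while plan:
        action, plan = plan[0], plan[1:]
        expected = predict(state, action)        # expected (possibly partial) state
        state = observe(action)                  # partial observation after execution
        if not consistent(expected, state):      # discrepancy detected
            hypotheses = diagnose(hypotheses, state)   # revise hypotheses with new evidence
            plan = replan(state, hypotheses)           # new plan that avoids suspected faults
    return state

def consistent(expected, observed):
    """Discrepancy check over the fluents that are actually observed."""
    return all(observed.get(f) == v for f, v in expected.items() if f in observed)
```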
Tathagata Chakraborti Research staff member, IBM Research AI
How to react when there is no solution? On large language models and model space reasoning for automated planning
Automated planning, the ability to decide what to do next, is crucial for the design of autonomous systems such as robots. However, when a model yields no plan of action, there is little to no guidance on how to react, either for the developer of the system or for an autonomous system that can self-adapt to failures. In this talk, we will explore strategies for dealing with the unsolvability of planning models and the curious case for using large-scale statistical models such as LLMs as an assist for combinatorial search.
Danesh Tarapore Assistant professor, University of Southampton
Resilient robot swarms: from decentralized fault-detection to rapid fault-recovery
Despite over 30 years of research on robot swarms, groups of robots that coordinate to perform a wide array of tasks (e.g., collectively monitoring large environments), existing swarms are unprepared for long-term autonomy: they are unable to deal with the fragility of robot hardware, are easily challenged by inevitable changes in their operating environments, and are frail systems that cease functioning in difficult conditions. In this talk, I will discuss my attempts to remedy this situation through the development of algorithms for robot fault-detection and behaviour-adaptation. I will introduce bio-inspired algorithms that (a) allow a robot swarm to robustly detect common sensor/motor faults in its members while remaining resilient to changes in their exhibited behaviours (e.g., due to online learning), and (b) help faulty robots adapt to sustained damage by rapidly discovering compensatory behaviours that work despite the damage or changes in their environment. Finally, I will outline my long-term vision for real-world applications of these algorithms in environmental monitoring.
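As a loose stand-in for the decentralized fault-detection idea (not the bio-inspired algorithms presented in the talk), one could flag a swarm member whose behavioural statistics are outliers relative to the rest of the group; all feature names, numbers, and the threshold below are made up for illustration.

```python
import numpy as np

def flag_faulty(feature_vectors, threshold=2.5):
    """Flag swarm members whose behavioural features deviate strongly from the group.

    `feature_vectors` is an (n_robots, n_features) array of locally observed
    behaviour statistics (e.g. speed, mean neighbour distance); a robot is
    flagged when its distance to the group median is an outlier in robust
    z-score terms.
    """
    X = np.asarray(feature_vectors, dtype=float)
    dists = np.linalg.norm(X - np.median(X, axis=0), axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-9   # robust spread
    return (dists - np.median(dists)) / mad > threshold

# Toy swarm of five robots; the robot at index 3 barely moves (possible motor fault).
features = [[1.0, 0.9], [1.1, 1.0], [0.95, 1.05], [0.05, 0.1], [1.02, 0.98]]
print(flag_faulty(features))   # -> [False False False  True False]
```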
Sanne van Waveren Postdoctoral researcher, Incoming at Georgia Institute of Technology
Towards automatically correcting robot failures using human input
Robots should adapt their behavior to new situations and people's preferences while ensuring the safety of the robot and its environment. Due to the large number of possible situations robots might encounter, it becomes impractical to define or learn all behaviors prior to deployment, so the robot will inevitably fail at some point. My research focuses on safe human-robot interaction and how we can correct robots in ways that ensure the robot will do what we tell it to do, e.g., through formal synthesis. In this talk, I will address how we can 1) shield robots from executing high-level actions that would lead to failure states, and 2) incorporate people's feedback into motion planning to increase the perceived safety of a drone flying close to people.
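As a minimal sketch of the shielding idea (not the speaker's synthesis-based approach), a shield can be a filter that blocks any high-level action whose possible successor states intersect a designated set of failure states; the transition model and state labels below are invented for illustration.

```python
def shield(state, proposed_action, transitions, failure_states):
    """Allow an action only if none of its possible successors is a failure state.

    `transitions[(state, action)]` maps to the set of possible successor states;
    both the model and the failure set are illustrative stand-ins.
    """
    successors = transitions.get((state, proposed_action), set())
    if successors & failure_states:
        return None                      # block the unsafe action
    return proposed_action

# Toy example: grasping a fragile object too fast may lead to a failure state.
transitions = {
    ("holding_nothing", "grasp_fast"): {"object_dropped"},
    ("holding_nothing", "grasp_slow"): {"holding_object"},
}
failure_states = {"object_dropped"}
print(shield("holding_nothing", "grasp_fast", transitions, failure_states))  # None (blocked)
print(shield("holding_nothing", "grasp_slow", transitions, failure_states))  # "grasp_slow"
```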
Masha Itkina Research scientist, Stanford University/Toyota Research Institute (TRI)
Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction
Although neural networks have seen tremendous success as predictive models in a variety of domains, they can be overly confident in their predictions on out-of-distribution (OOD) data. To be viable for safety-critical applications in human environments, like autonomous vehicles or assistive robotics, neural networks must accurately estimate their epistemic or model uncertainty, achieving a level of system self-awareness. In this talk, I will present an approach based on evidential deep learning to estimate the epistemic uncertainty over a low-dimensional, interpretable latent space in a trajectory prediction setting. We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among the semantic concepts: past agent behavior, road structure, and social context. We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines. Looking to the future, by enabling uncertainty-aware spatiotemporal inference in robotic systems, I hope to engender safe and socially cohesive human-robot interactions.
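For readers unfamiliar with evidential deep learning, here is a minimal, self-contained illustration of the underlying idea rather than the presented model: per-class evidence parameterizes a Dirichlet distribution, and low total evidence translates into a high epistemic-uncertainty ("vacuity") score, which can flag out-of-distribution inputs; all numbers below are arbitrary.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Expected class probabilities and a simple epistemic-uncertainty score.

    `evidence` holds non-negative per-class evidence (e.g. from a network head);
    alpha = evidence + 1 parameterizes a Dirichlet. The vacuity term K / sum(alpha)
    is large when total evidence is low, signalling out-of-distribution inputs.
    """
    evidence = np.asarray(evidence, dtype=float)
    alpha = evidence + 1.0
    probs = alpha / alpha.sum()          # expected categorical probabilities
    vacuity = len(alpha) / alpha.sum()   # epistemic uncertainty in [0, 1]
    return probs, vacuity

# Confident in-distribution prediction vs. a low-evidence (possibly OOD) one.
print(dirichlet_uncertainty([40.0, 2.0, 1.0]))   # peaked probabilities, low vacuity
print(dirichlet_uncertainty([0.2, 0.1, 0.3]))    # near-uniform probabilities, high vacuity
```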